    Knowledge Extraction from Natural Language Requirements into a Semantic Relation Graph

    Knowledge extraction and representation aim to identify information and transform it into a machine-readable format. Knowledge representations support information retrieval tasks such as searching for single statements, documents, or metadata. Requirements specifications of complex systems such as automotive software systems are usually divided into different subsystem specifications. Nevertheless, there are semantic relations between individual documents of the separate subsystems, which have to be considered in further processes (e.g. dependencies). If requirements engineers or other developers are not aware of these relations, this can lead to inconsistencies or malfunctions of the overall system. Therefore, there is a strong need for tool support to detect semantic relations in a set of large natural language requirements specifications. In this work we present a knowledge extraction approach based on an explicit knowledge representation of the content of natural language requirements as a semantic relation graph. Our approach is fully automated and includes an NLP pipeline to transform unrestricted natural language requirements into a graph. We split the natural language into different parts and relate them to each other based on their semantic relations. In addition to semantic relations, other relationships can also be included in the graph. We envision using a semantic search algorithm such as spreading activation to allow users to search for different semantic relations in the graph.
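    To make the idea concrete, the following is a minimal sketch of such a graph and a spreading-activation search over it, written in Python with networkx. The requirement parts, relation labels, and parameters are illustrative assumptions, not the authors' actual pipeline.

        # Minimal sketch: a semantic relation graph plus a simple
        # spreading-activation search. Node labels, relation types, and
        # parameters are illustrative, not the authors' pipeline.
        import networkx as nx

        G = nx.Graph()
        # Requirement parts become nodes; semantic relations become typed edges.
        G.add_edge("airbag controller", "crash sensor", relation="reads from")
        G.add_edge("crash sensor", "acceleration signal", relation="provides")
        G.add_edge("airbag controller", "ignition circuit", relation="triggers")

        def spreading_activation(graph, seeds, decay=0.5, threshold=0.1, steps=3):
            """Propagate activation from seed nodes along graph edges."""
            activation = {node: 1.0 for node in seeds}
            for _ in range(steps):
                spread = {}
                for node, value in activation.items():
                    if value < threshold:
                        continue
                    # Each active node passes a decayed share to its neighbors.
                    share = value * decay / max(graph.degree(node), 1)
                    for neighbor in graph.neighbors(node):
                        spread[neighbor] = spread.get(neighbor, 0.0) + share
                for node, value in spread.items():
                    activation[node] = max(activation.get(node, 0.0), value)
            return activation

        # Nodes with high activation are semantically close to the query.
        print(spreading_activation(G, ["crash sensor"]))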

    What Am I Testing and Where? Comparing Testing Procedures based on Lightweight Requirements Annotations

    [Context] The testing of software-intensive systems is performed in different test stages, each having a large number of test cases. These test cases are commonly derived from requirements. Each test stage exhibits specific demands and constraints with respect to its degree of detail and what can be tested. Therefore, specific test suites are defined for each test stage. In this paper, the focus is on the domain of embedded systems, where, among others, typical test stages are Software-in-the-Loop and Hardware-in-the-Loop. [Objective] Monitoring and controlling which requirements are verified in which detail and in which test stage is a challenge for engineers. However, this information is necessary to assure a certain test coverage, to minimize redundant testing procedures, and to avoid inconsistencies between test stages. In addition, engineers are reluctant to state their requirements in terms of structured languages or models that would facilitate relating requirements to test executions. [Method] With our approach, we close the gap between requirements specifications and test executions. Previously, we have proposed a lightweight markup language for requirements which provides a set of annotations that can be applied to natural language requirements. The annotations are mapped to events and signals in test executions. As a result, meaningful insights from a set of test executions can be directly related to artifacts in the requirements specification. In this paper, we use the markup language to compare different test stages with one another. [Results] We annotate 443 natural language requirements of a driver assistance system by means of our lightweight markup language. The annotations are then linked to 1300 test executions from a simulation environment and 53 test executions from test drives with human drivers. Based on the annotations, we are able to analyze how similar the test stages are and how well test stages and test cases are aligned with the requirements. Further, we highlight the general applicability of our approach through this extensive experimental evaluation. [Conclusion] With our approach, the results of several test stages are linked to the requirements, which enables the evaluation of complex test executions. By this means, practitioners can easily evaluate how well a system performs with regard to its specification and, additionally, can reason about the expressiveness of the applied test stage.
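    As a rough illustration of the idea, the Python sketch below extracts hypothetical [kind:name] annotations from a requirement and checks which of them are observable in each test stage's execution logs. The annotation syntax, signal names, and stages are assumptions, not the actual markup language.

        # Illustrative sketch of relating annotated requirements to test
        # executions; syntax and names are assumptions, not the authors'
        # actual markup language.
        import re

        requirement = ("When the [signal:ego_speed] exceeds 120 km/h, "
                       "the system shall issue a [event:speed_warning].")

        # Extract annotations of the form [kind:name] from the requirement.
        annotations = re.findall(r"\[(signal|event):(\w+)\]", requirement)

        # Observed signals/events per test stage (e.g. from execution logs).
        executions = {
            "software_in_the_loop": {"ego_speed", "speed_warning"},
            "test_drive": {"ego_speed"},
        }

        # Per-stage coverage of the requirement's annotations.
        for stage, observed in executions.items():
            covered = [name for _, name in annotations if name in observed]
            print(f"{stage}: {len(covered)}/{len(annotations)} annotations covered")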

    Explainable software systems

    Software and software-controlled technical systems play an increasing role in our daily lives. In cyber-physical systems, which connect the physical and the digital world, software not only influences how we perceive and interact with our environment but also makes decisions that influence our behavior. Therefore, the ability of software systems to explain their behavior and decisions will become an important property that is crucial for their acceptance in our society. We call software systems with this ability explainable software systems. In this article, we highlight some of our past work on methods and tools for designing explainable software systems. More specifically, we describe an architectural framework for designing self-explainable software systems, which is based on the MAPE loop for self-adaptive systems. Afterward, we show that explainability is also important for tools that are used by engineers during the development of software systems. We give examples from the area of requirements engineering where we use techniques from natural language processing and neural networks to help engineers comprehend the complex information structures embedded in system requirements.
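    The following Python sketch illustrates the general idea of attaching an explanation capability to a MAPE-style loop. The class, the threshold, and the wording are illustrative assumptions, not the architectural framework described in the article.

        # Schematic sketch of a MAPE loop extended with an explanation
        # step; names and values are illustrative, not the authors' framework.
        class SelfExplainableSystem:
            def __init__(self):
                self.log = []  # shared knowledge across MAPE phases

            def monitor(self):
                return {"battery": 0.15}  # sensed system state (stubbed)

            def analyze(self, state):
                return state["battery"] < 0.2  # adaptation needed?

            def plan(self, state):
                return "reduce_sampling_rate"  # chosen adaptation

            def execute(self, action, state):
                # Record what was done and why, so it can be explained later.
                self.log.append((action, state))

            def explain(self):
                action, state = self.log[-1]
                return (f"I chose '{action}' because the battery level "
                        f"was {state['battery']:.0%}, below the 20% threshold.")

        system = SelfExplainableSystem()
        state = system.monitor()
        if system.analyze(state):
            system.execute(system.plan(state), state)
            print(system.explain())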

    Soft-gluon Resummation for High-pT Inclusive-Hadron Production at COMPASS

    We study the cross section for the photoproduction reaction gamma N -> h X in fixed-target scattering at COMPASS, where the hadron h is produced at large transverse momentum. We investigate the role played by higher-order QCD corrections to the cross section. In particular, we address large logarithmic "threshold" corrections to the rapidity-dependent partonic cross sections, which we resum to all orders at next-to-leading logarithmic accuracy. In our comparison to the experimental data we find that the threshold contributions are large and improve the agreement between data and theoretical predictions significantly.
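    Schematically, threshold resummation organizes the large logarithms of the Mellin-moment variable N into an exponent. At next-to-leading logarithmic (NLL) accuracy the resummed partonic cross section takes the standard generic form below; this is a sketch of the structure, not the paper's exact expressions:

        \hat{\sigma}^{\mathrm{res}}(N) = C(\alpha_s)\,
          \exp\!\left[\, \ln N \, g^{(1)}(\lambda) + g^{(2)}(\lambda) \,\right],
        \qquad \lambda \equiv b_0 \, \alpha_s \ln N ,

    where g^{(1)} resums the leading logarithms alpha_s^n ln^{n+1} N, g^{(2)} the next-to-leading ones alpha_s^n ln^n N, and C(alpha_s) collects N-independent hard coefficients.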

    Good RE artifacts? I know it when I use it!

    The definition of high-quality or good RE artifacts is often provided through normative references, such as quality standards or textbooks (e.g., ISO/IEC/IEEE 29148). We see several problems with such normative references.

    Quality standards are incomplete. Several quality standards describe quality through a set of abstract criteria. When analyzing these characteristics in detail, we see that there are two different types of criteria: some criteria, such as ambiguity, consistency, completeness, and singularity, are factors that describe properties of the RE artifact itself. In contrast, feasibility, traceability, and verifiability state that activities can be performed with the artifact. This is a small yet important difference: while the former can be assessed by analyzing just the artifact by itself, the latter describe a relationship of the artifact to the context of its usage. Yet this usage context is only incompletely represented in the quality standards: for example, why is it important that requirements can be implemented (feasible, in the terminology of ISO 29148) and verified, while other activities, such as maintenance, are not part of the quality model? Therefore, we argue that normative standards do not take all activities into account systematically and, thus, are missing relevant quality factors.

    Quality standards are only implicitly context-dependent. One could go even further and ask about the value of some artifact-based properties, such as singularity. A normative approach does not provide such rationales. This is different for activity-based properties, such as verifiability, since these properties are defined through their usage: if we need to verify the requirements, properties of the artifact that increase verifiability are important. If we do not need to verify a requirement, e.g., because we use the artifacts only for task management in an agile process, these properties might not be relevant. This example shows that, in contrast to the normative definition of quality in RE standards, RE quality usually depends on the context.

    Quality standards lack precise reasoning. In defining most of the aforementioned criteria, the standards remain abstract and vague. For some criteria, such as ambiguity, the standards provide detailed lists of factors to avoid. However, these factors have an imprecise relation to the abstract criteria mentioned above, and, consequently, the harm they might potentially cause remains unclear.
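    To make the activity-based view concrete, here is a toy Python sketch in which a quality factor is relevant only if some planned activity depends on it. The factor and activity names are illustrative, not a proposed quality model.

        # Toy illustration of the activity-based view argued above:
        # a factor matters only if some project activity depends on it.
        # Factor and activity names are illustrative assumptions.
        FACTOR_TO_ACTIVITIES = {
            "unambiguity": {"implementation", "verification"},
            "verifiability": {"verification"},
            "singularity": {"task_management"},
            "maintainability": {"maintenance"},
        }

        def relevant_factors(project_activities):
            """Return the quality factors some planned activity depends on."""
            return {factor for factor, activities in FACTOR_TO_ACTIVITIES.items()
                    if activities & project_activities}

        # An agile project that skips formal verification: verifiability
        # drops out of the relevant set, unambiguity and singularity remain.
        print(relevant_factors({"implementation", "task_management"}))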

    Evaluation of a specification approach for vehicle functions using activity diagrams in requirements documents

    The rising complexity of systems has long been a major challenge in requirements engineering. It manifests in more extensive and harder-to-understand requirements documents. At Daimler AG, an approach is applied that combines the use of activity diagrams with natural language specifications to specify vehicle functions. The approach starts with an activity diagram that is created to get an early overview. The contained information is then transferred to a textual requirements document, where details are added and the behavior is refined. While the approach aims at reducing the effort needed to understand a function’s behavior, its application causes new challenges of its own. By examining existing specifications at Daimler, we identified nine categories of inconsistencies and deviations between activity diagrams and their textual representations. This paper extends a previous case study on the subject by presenting additional data we have acquired. Our analysis indicates that a coexistence of textual and graphical representations of models without proper tool support results in inconsistencies and deviations.
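    As an illustration of the kind of tool support these findings call for, the hypothetical Python sketch below flags activity-diagram actions that no longer appear in the textual requirements. The action labels and the word-matching heuristic are assumptions, not the study's method.

        # Hypothetical sketch of one consistency check: report diagram
        # actions whose wording is absent from the requirements text.
        # Labels and the matching heuristic are illustrative assumptions.
        diagram_actions = {"detect obstacle", "warn driver", "apply brakes"}

        document_text = """
        The system shall detect obstacles ahead of the vehicle.
        If an obstacle is detected, the system shall warn the driver.
        """.lower()

        def missing_in_text(actions, text):
            """Actions whose words do not all occur in the document text."""
            return {a for a in actions
                    if not all(word in text for word in a.split())}

        # Reports {'apply brakes'}: a deviation between diagram and text.
        print(missing_in_text(diagram_actions, document_text))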